6,390 research outputs found

    Deterministic Construction of Binary Measurement Matrices with Various Sizes

    We introduce a general framework for deterministically constructing binary measurement matrices for compressed sensing. The proposed matrices are composed of (circulant) permutation submatrix blocks and zero submatrix blocks, which makes their hardware realization convenient and easy. First, using the famous Johnson bound for binary constant-weight codes, we derive a new lower bound on the coherence of binary matrices with uniform column weight. We then present a large class of binary base matrices whose coherence asymptotically achieves this new bound. Finally, by choosing proper rows and columns from these base matrices, we construct the desired measurement matrices with various sizes; empirically, their performance is comparable to that of corresponding Gaussian matrices.
    Comment: 5 pages, 3 figures
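    The abstract's key quantity, mutual coherence, is easy to compute directly. The sketch below builds a small hypothetical 4x6 binary matrix with uniform column weight 2 (every column is an incidence vector of a pair of rows; this is an illustration, not one of the paper's constructions), computes its coherence, and compares it against the classical Welch lower bound rather than the paper's Johnson-based bound, which the abstract does not state explicitly.

    ```python
    import numpy as np
    from itertools import combinations

    def coherence(A):
        """Mutual coherence: largest absolute inner product between
        distinct unit-normalized columns of A."""
        A = A / np.linalg.norm(A, axis=0)
        G = np.abs(A.T @ A)
        np.fill_diagonal(G, 0.0)
        return G.max()

    def welch_bound(m, n):
        """Classical Welch lower bound on the coherence of any m x n matrix."""
        return np.sqrt((n - m) / (m * (n - 1)))

    # Hypothetical binary matrix: column j is the indicator of the j-th
    # pair of rows, so every column has weight 2 and any two columns
    # overlap in at most one row (coherence 1/2).
    m, n = 4, 6
    A = np.zeros((m, n))
    for j, (r1, r2) in enumerate(combinations(range(m), 2)):
        A[r1, j] = A[r2, j] = 1.0

    print(coherence(A), welch_bound(m, n))  # 0.5 vs. ~0.316
    ```

    Binary matrices generally cannot meet the Welch bound, which is why a sharper Johnson-type bound specific to uniform-column-weight binary matrices is of interest.
    
    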

    Alternating direction algorithms for $\ell_0$ regularization in compressed sensing

    In this paper, we propose three iterative greedy algorithms for compressed sensing, called \emph{iterative alternating direction} (IAD), \emph{normalized iterative alternating direction} (NIAD) and \emph{alternating direction pursuit} (ADP). They stem from the iteration steps of the alternating direction method of multipliers (ADMM) for $\ell_0$-regularized least squares ($\ell_0$-LS) and can be considered the alternating direction versions of the well-known iterative hard thresholding (IHT), normalized iterative hard thresholding (NIHT) and hard thresholding pursuit (HTP), respectively. First, relative to the general iteration steps of ADMM, the proposed algorithms have no splitting or dual variables in their iterations, so the current approximation depends directly on past iterations. Second, provable theoretical guarantees are provided in terms of the restricted isometry property; to the best of our knowledge, this is the first theoretical guarantee of ADMM for $\ell_0$-LS. Finally, the proposed algorithms greatly outperform the corresponding IHT, NIHT and HTP when reconstructing both constant-amplitude signals with random signs (CARS signals) and Gaussian signals.
    Comment: 16 pages, 1 figure
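    For context, the classical IHT baseline that the abstract's algorithms build on iterates $x \leftarrow H_k\!\left(x + A^\top(y - Ax)\right)$, where $H_k$ keeps the $k$ largest-magnitude entries. The sketch below implements this standard baseline only; the IAD/NIAD/ADP variants themselves are not specified in the abstract.

    ```python
    import numpy as np

    def hard_threshold(x, k):
        """Keep the k largest-magnitude entries of x and zero the rest."""
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        out[idx] = x[idx]
        return out

    def iht(A, y, k, iters=100):
        """Classical iterative hard thresholding:
        x <- H_k(x + A^T (y - A x)), starting from x = 0."""
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = hard_threshold(x + A.T @ (y - A @ x), k)
        return x

    # Sanity check with a trivial orthonormal A: one gradient step recovers x.
    A = np.eye(6)
    x_true = np.zeros(6)
    x_true[1], x_true[4] = 3.0, -2.0
    x_hat = iht(A, A @ x_true, k=2, iters=5)  # exactly x_true here
    ```

    NIHT adds an adaptive step size in front of the gradient term, and HTP re-solves a least-squares problem on the detected support each iteration; the paper's contribution is replacing these update rules with ADMM-derived ones.
    
    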

    Nonextensive information theoretical machine

    In this paper, we propose a new discriminative model named the \emph{nonextensive information theoretical machine (NITM)}, based on a nonextensive generalization of Shannon information theory. In NITM, weight parameters are treated as random variables. Tsallis divergence is used to regularize the distribution of the weight parameters, and the maximum unnormalized Tsallis entropy distribution is used to evaluate the fitting effect. On the one hand, it is shown that some well-known margin-based loss functions, such as the $\ell_{0/1}$ loss, hinge loss, squared hinge loss and exponential loss, can be unified by unnormalized Tsallis entropy. On the other hand, Gaussian prior regularization is generalized to Student-t prior regularization with similar computational complexity. The model can be solved efficiently by gradient-based convex optimization, and its performance is illustrated on standard datasets.

    Bayesian linear regression with Student-t assumptions

    As an automatic method of determining model complexity using the training data alone, Bayesian linear regression provides a principled way to select hyperparameters. However, one often needs approximate inference when the distribution assumption goes beyond the Gaussian. In this paper, we propose a Bayesian linear regression model with Student-t assumptions (BLRS), which can be inferred exactly. In this framework, both the conjugate prior and the expectation-maximization (EM) algorithm are generalized. Meanwhile, we prove that the maximum likelihood solution is equivalent to that of standard Bayesian linear regression with Gaussian assumptions (BLRG). The q-EM algorithm for BLRS is nearly identical to the EM algorithm for BLRG. It is shown that q-EM for BLRS can converge faster than EM for BLRG on the task of predicting online news popularity.
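    The Gaussian baseline (BLRG) that the abstract compares against has a closed-form posterior. A minimal sketch of that standard result, assuming a zero-mean Gaussian weight prior with precision `alpha` and Gaussian noise with precision `beta` (the BLRS/q-EM machinery itself is not reproduced here):

    ```python
    import numpy as np

    def blrg_posterior_mean(X, y, alpha, beta):
        """Posterior mean of the weights under a N(0, alpha^{-1} I) prior
        and Gaussian noise with precision beta (standard Bayesian ridge):
        m = beta * (alpha I + beta X^T X)^{-1} X^T y."""
        n_features = X.shape[1]
        S_inv = alpha * np.eye(n_features) + beta * X.T @ X
        return beta * np.linalg.solve(S_inv, X.T @ y)

    # With X = I and alpha = beta = 1, the posterior mean shrinks y by 1/2.
    w = blrg_posterior_mean(np.eye(3), np.array([1.0, 2.0, 3.0]),
                            alpha=1.0, beta=1.0)  # -> [0.5, 1.0, 1.5]
    ```

    The appeal of the Student-t extension is that it keeps this kind of exact, closed-form update structure while using a heavier-tailed prior.
    
    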

    Johnson Type Bounds on Constant Dimension Codes

    Very recently, an operator channel was defined by Koetter and Kschischang in their study of random network coding. They also introduced constant dimension codes and demonstrated that these codes can be employed to correct errors and/or erasures over the operator channel. Constant dimension codes are equivalent to the so-called linear authentication codes introduced by Wang, Xing and Safavi-Naini when constructing distributed authentication systems in 2003. In this paper, we study constant dimension codes. It is shown that Steiner structures are optimal constant dimension codes achieving the Wang-Xing-Safavi-Naini bound. Furthermore, we show that constant dimension codes achieve the Wang-Xing-Safavi-Naini bound if and only if they are certain Steiner structures. We then derive two Johnson type upper bounds, denoted I and II, on constant dimension codes. The Johnson type bound II slightly improves on the Wang-Xing-Safavi-Naini bound. Finally, we point out that a family of known Steiner structures is actually a family of optimal constant dimension codes achieving both Johnson type bounds I and II.
    Comment: 12 pages, submitted to Designs, Codes and Cryptography
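    In this setting, a codeword is a fixed-dimension subspace, and the standard subspace metric is $d(U,V) = \dim U + \dim V - 2\dim(U \cap V)$, which equals $2\dim(U+V) - \dim U - \dim V$. A small sketch over $\mathbb{F}_2$, representing subspaces by binary generator matrices (a generic illustration of the metric, not a construction from the paper):

    ```python
    import numpy as np

    def gf2_rank(M):
        """Rank of a binary matrix over GF(2) via Gaussian elimination."""
        M = np.array(M, dtype=np.uint8) % 2
        rank = 0
        rows, cols = M.shape
        for c in range(cols):
            pivot = next((r for r in range(rank, rows) if M[r, c]), None)
            if pivot is None:
                continue
            M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
            for r in range(rows):
                if r != rank and M[r, c]:
                    M[r] ^= M[rank]               # eliminate column c
            rank += 1
        return rank

    def subspace_distance(U, V):
        """d(U, V) = 2 dim(U+V) - dim U - dim V, with U, V given as
        generator matrices over GF(2); dim(U+V) is the rank of the
        stacked generators."""
        du, dv = gf2_rank(U), gf2_rank(V)
        return 2 * gf2_rank(np.vstack([U, V])) - du - dv

    # Two 2-dimensional subspaces of F_2^3 sharing a 1-dimensional
    # intersection: d = 2 + 2 - 2*1 = 2.
    U = np.array([[1, 0, 0], [0, 1, 0]])
    V = np.array([[1, 0, 0], [0, 0, 1]])
    print(subspace_distance(U, V))  # 2
    ```

    The bounds in the paper control how many such subspaces can pairwise keep a prescribed minimum distance.
    
    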

    Statistical Inference Attack Against PHY-layer Key Extraction and Countermeasures

    This paper provides a formal theoretical analysis of channel correlations in both real indoor and outdoor environments. Moreover, it studies empirical statistical inference attacks (SIA) against LSB key extraction, whereby an adversary infers the signature of a target link, and consequently the secret key extracted from that signature, by observing the surrounding links. Prior work assumes theoretical link-correlation models for the inference; in contrast, our study makes no assumption on link correlation. Instead, we apply machine learning (ML) methods for link inference based on empirically measured link signatures. ML algorithms are developed to launch SIAs under various realistic scenarios. Our experimental results show that the proposed inference algorithms remain quite effective even without assumptions on link correlation. In addition, our inference algorithms can reduce the key search space by many orders of magnitude compared to brute-force search. We further propose a countermeasure against statistical inference attacks, FBCH (forward-backward cooperative key extraction protocol with helpers). In FBCH, helpers (other trusted wireless nodes) are introduced to provide more randomness in the key extraction. Our experimental results verify the effectiveness of the proposed protocol.
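    As background for the attack surface, a common simplified quantizer in PHY-layer key extraction turns each node's channel measurements (e.g., RSSI in dB) into bits by thresholding at the median; channel reciprocity makes the two legitimate traces, and hence the bit strings, nearly identical. This is a generic textbook-style sketch, not the specific scheme attacked in the paper:

    ```python
    import numpy as np

    def quantize_bits(rssi):
        """Turn a trace of channel measurements into key bits by
        thresholding at the trace's own median (a simplified quantizer)."""
        rssi = np.asarray(rssi, dtype=float)
        return (rssi > np.median(rssi)).astype(int)

    # Alice and Bob measure highly correlated (reciprocal) channel gains;
    # values are hypothetical dB readings.
    alice = np.array([-62.0, -55.1, -70.3, -50.2, -68.9, -52.4])
    bob = alice + np.random.default_rng(0).normal(0.0, 0.5, 6)  # small noise
    print(quantize_bits(alice), quantize_bits(bob))
    ```

    The attack in the abstract exploits the fact that a nearby observed link's measurements are correlated with this trace, letting ML models shrink the search space over the resulting bit string.
    
    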

    Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes

    In this correspondence, we study the minimum pseudo-weight and minimum pseudo-codewords of low-density parity-check (LDPC) codes under linear programming (LP) decoding. First, we show that the lower bound of Kelly, Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC code with girth greater than 4 is tight if and only if the pseudo-codeword is a real multiple of a codeword. Then, we show that the lower bound of Kashyap and Vardy on the stopping distance of an LDPC code is also a lower bound on the pseudo-weight of a pseudo-codeword of an LDPC code with girth 4, and this lower bound is tight if and only if the pseudo-codeword is a real multiple of a codeword. Using these results, we further show that for some LDPC codes there are no minimum pseudo-codewords other than real multiples of minimum codewords. This means that LP decoding for these LDPC codes is asymptotically optimal, in the sense that the ratio of the decoding-error probabilities of LP decoding and maximum-likelihood decoding approaches 1 as the signal-to-noise ratio tends to infinity. Finally, some LDPC codes are listed to illustrate these results.
    Comment: 17 pages, 1 figure
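    The pseudo-weight referred to above, for the AWGN channel, is the standard quantity $w_p(\omega) = (\sum_i \omega_i)^2 / \sum_i \omega_i^2$ for a nonnegative pseudo-codeword $\omega$; on a 0/1 codeword it reduces to the Hamming weight, and it is invariant under positive scaling, which is why pseudo-codewords matter only up to real multiples. A one-function sketch:

    ```python
    import numpy as np

    def awgn_pseudoweight(w):
        """AWGN-channel pseudo-weight of a nonnegative pseudo-codeword:
        (sum w_i)^2 / sum w_i^2. Equals the Hamming weight on a 0/1
        codeword and is invariant under positive scaling."""
        w = np.asarray(w, dtype=float)
        return w.sum() ** 2 / (w ** 2).sum()

    print(awgn_pseudoweight([1, 1, 0, 1]))        # 3.0, the Hamming weight
    print(awgn_pseudoweight([2.5, 2.5, 0, 2.5]))  # still 3.0
    ```

    Fractional vertices of the LP relaxation can have pseudo-weight well below the minimum distance, which is exactly the failure mode the paper rules out for the listed codes.
    
    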

    Post-ACME2013 CP-violation in Higgs Physics and Electroweak Baryogenesis

    We present a class of cancellation mechanisms to suppress the total contribution of Barr-Zee diagrams to the electron electric dipole moment (eEDM). This class of mechanisms is of particular significance after the new eEDM upper limit released by the ACME collaboration in 2013, which strongly constrains the allowed magnitude of CP-violation in Higgs couplings and hence the feasibility of electroweak baryogenesis (EWBG). We point out that if both the CP-odd Higgs-photon-photon (Z boson) coupling and the CP-odd Higgs-electron-positron coupling are turned on, a cancellation may occur either among the contributions of a CP-mixing Higgs boson, with the other Higgs bosons decoupled, or between the contributions of a CP-even and a CP-odd Higgs boson. With the assistance of these cancellation mechanisms, a large CP-phase in Higgs couplings remains allowed, with viable EWBG. The reopened parameter regions could be probed by future neutron and mercury EDM measurements and by direct measurements of Higgs CP-properties at the LHC and future colliders.
    Comment: 9 pages, 4 figures, 2 tables

    Sparse signal recovery by $\ell_q$ minimization under restricted isometry property

    In the context of compressed sensing, nonconvex $\ell_q$ minimization with $0<q<1$ has been studied in recent years. In this paper, by generalizing the sharp bound for $\ell_1$ minimization of Cai and Zhang, we show that the condition $\delta_{(s^q+1)k}<\frac{1}{\sqrt{s^{q-2}+1}}$ in terms of the \emph{restricted isometry constant (RIC)} guarantees the exact recovery of $k$-sparse signals in the noiseless case and the stable recovery of approximately $k$-sparse signals in the noisy case by $\ell_q$ minimization. This result is more general than the sharp bound for $\ell_1$ minimization when the order of the RIC is greater than $2k$, and it illustrates the fact that $\ell_q$ minimization provides a better approximation to $\ell_0$ minimization than $\ell_1$ minimization does.
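    The RIC threshold stated in the abstract is a simple closed-form function of $s$ and $q$, so it is easy to evaluate numerically. At $q = 1$ the condition becomes $\delta_{(s+1)k} < \sqrt{s/(s+1)}$, i.e. the Cai-Zhang sharp $\ell_1$ bound $\sqrt{(t-1)/t}$ with $t = s+1$. A small helper that only evaluates the formula from the abstract:

    ```python
    import math

    def ric_threshold(s, q):
        """Right-hand side of the abstract's RIC condition
        delta_{(s^q + 1) k} < 1 / sqrt(s^{q-2} + 1)."""
        return 1.0 / math.sqrt(s ** (q - 2) + 1.0)

    # At q = 1 and s = 3: 1/sqrt(1/3 + 1) = sqrt(3/4) ~ 0.866,
    # matching the Cai-Zhang ell_1 bound with t = s + 1 = 4.
    print(ric_threshold(3, 1.0))
    ```

    For $s > 1$ the threshold shrinks as $q$ grows, consistent with smaller $q$ (closer to $\ell_0$) tolerating a weaker isometry condition at the same RIC order.
    
    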

    Reconstruction Guarantee Analysis of Binary Measurement Matrices Based on Girth

    Binary 0-1 measurement matrices, especially those from coding theory, have recently been introduced to compressed sensing (CS). There is no known general way to efficiently check whether a measurement matrix has preferred properties such as the restricted isometry property (RIP) or the nullspace property (NSP). Khajehnejad \emph{et al.} used \emph{girth} to certify the good performance of sparse binary measurement matrices. In this paper, we examine the performance under basis pursuit of binary measurement matrices with uniform column weight $\gamma$ and arbitrary girth $g$. Explicit sufficient conditions for exact reconstruction involving only $\gamma$ and $g$ are obtained, which improve the previous results derived from the RIP for any girth $g$ and the results from the NSP when $g/2$ is odd. Moreover, we derive explicit $\ell_1/\ell_1$, $\ell_2/\ell_1$ and $\ell_\infty/\ell_1$ sparse approximation guarantees. These results further show that large girth has a positive impact on the performance of binary measurement matrices under basis pursuit, and that the binary parity-check matrices of good LDPC codes are important candidates for measurement matrices.
    Comment: accepted by IEEE ISIT 201
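    The smallest interesting girth condition is easy to test in code: the Tanner graph of a binary matrix has girth greater than 4 exactly when no two columns share a 1 in two or more rows, which can be read off the Gram matrix of the columns. A minimal checker (a standard observation, not a result specific to the paper):

    ```python
    import numpy as np

    def has_four_cycle(H):
        """True iff the Tanner graph of binary matrix H contains a 4-cycle,
        i.e. some pair of distinct columns overlaps in >= 2 rows."""
        H = np.asarray(H, dtype=int)
        overlaps = H.T @ H              # pairwise column inner products
        np.fill_diagonal(overlaps, 0)   # ignore each column against itself
        return bool((overlaps >= 2).any())

    print(has_four_cycle(np.eye(4)))        # False: disjoint columns
    print(has_four_cycle([[1, 1], [1, 1]])) # True: columns share two rows
    ```

    Certifying larger girth requires a shortest-cycle search on the Tanner graph, but this pairwise-overlap test already distinguishes girth-4 matrices from the better-behaved girth >= 6 ones studied in the paper.
    
    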